Online Continual Learning with Contrastive Vision Transformer


Abstract

Online continual learning (online CL) studies the problem of learning sequential tasks from an online data stream without task boundaries, aiming to adapt to new data while alleviating catastrophic forgetting on past tasks. This paper proposes a framework, Contrastive Vision Transformer (CVT), which designs a focal contrastive learning strategy based on a transformer architecture, to achieve a better stability-plasticity trade-off for online CL. Specifically, we design a new external attention mechanism for online CL that implicitly captures previous tasks' information. Besides, CVT contains learnable focuses for each class, which could accumulate the knowledge of previous classes to alleviate forgetting. Based on the learnable focuses, we devise a focal contrastive loss to rebalance contrastive learning between new and past classes and consolidate previously learned representations. Moreover, CVT contains a dual-classifier structure for decoupling learning of the current classes and balancing all observed classes. Extensive experimental results show that our approach achieves state-of-the-art performance with even fewer parameters on online CL benchmarks and effectively alleviates catastrophic forgetting.
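The abstract does not spell out the focal contrastive loss, but the idea of per-class focus vectors suggests a contrastive objective computed against class focuses rather than only against other samples. A minimal sketch of such a loss in numpy, assuming L2-normalized embeddings and focuses (the function name, temperature, and exact form are illustrative, not the paper's definition):

```python
import numpy as np

def focal_contrastive_loss(embeddings, labels, focuses, temperature=0.1):
    """Illustrative sketch: pull each embedding toward its class focus and
    push it away from the focuses of all other classes.

    embeddings: (N, D) L2-normalized feature vectors
    labels:     (N,)   integer class ids
    focuses:    (C, D) L2-normalized per-class focus vectors
    """
    logits = embeddings @ focuses.T / temperature      # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy against each sample's own class focus
    return -log_probs[np.arange(len(labels)), labels].mean()

# toy usage with random embeddings and focuses
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
foc = rng.normal(size=(4, 16))
foc /= np.linalg.norm(foc, axis=1, keepdims=True)
loss = focal_contrastive_loss(emb, rng.integers(0, 4, size=8), foc)
```

In a continual setting the focuses of old classes can be kept and updated slowly, which is one way accumulated class knowledge could counteract forgetting.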


Similar articles

Optimal Tuning of Continual Online Exploration in Reinforcement Learning

This paper presents a framework allowing to tune continual exploration in an optimal way. It first quantifies the rate of exploration by defining the degree of exploration of a state as the probability-distribution entropy for choosing an admissible action. Then, the exploration/exploitation tradeoff is stated as a global optimization problem: find the exploration strategy that minimizes the ex...
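The degree of exploration defined above is just the Shannon entropy of the distribution over admissible actions in a state; a short illustration (the helper name and example policies are mine, not the paper's):

```python
import numpy as np

def degree_of_exploration(action_probs):
    """Shannon entropy of the action-choice distribution in a state."""
    p = np.asarray(action_probs, dtype=float)
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -(p * np.log(p)).sum()

uniform = degree_of_exploration([0.25, 0.25, 0.25, 0.25])  # maximal exploration
greedy = degree_of_exploration([1.0, 0.0, 0.0, 0.0])       # pure exploitation
```

A uniform policy over k actions attains the maximum entropy log(k), while a deterministic (greedy) policy has entropy 0, so the scalar cleanly parameterizes the exploration/exploitation trade-off.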


Online Learning for Robot Vision

In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and ex...


Continual Reinforcement Learning with Complex Synapses

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a...


Learning to Forget: Continual Prediction with LSTM

Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the ...


Variational Continual Learning

This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and enti...
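The online variational inference underlying VCL treats the posterior after each task as the prior for the next. In the conjugate Gaussian case the recursion is exact, which makes for a compact toy illustration (this is an analogy for the update rule, not VCL applied to a neural network):

```python
import numpy as np

def posterior_update(prior_mu, prior_var, data, noise_var=1.0):
    """Exact Bayesian update of a Gaussian belief over an unknown mean,
    given observations with known noise variance. In VCL, the analogous
    variational posterior after task t becomes the prior for task t+1."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(data) / noise_var)
    return post_mu, post_var

# a stream of two "tasks": carry the posterior forward as the new prior
mu, var = 0.0, 10.0                                   # broad initial prior
mu, var = posterior_update(mu, var, np.array([1.9, 2.1]))
mu, var = posterior_update(mu, var, np.array([2.0, 2.2]))
```

Because all evidence is folded into the carried-forward posterior, nothing learned from the first task is discarded when the second arrives, which is the mechanism VCL uses against forgetting.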



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-20044-1_36